Fix: node to use chunking for uploads #1291
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @templates/node/src/inputFile.ts.twig:
- Around lines 2-4: The module imports Node fs APIs at top level (`basename`, `realpathSync`, `promises as fs`), which breaks edge runtimes. Switch to conditional/dynamic loading: remove the top-level `fs`/`realpathSync` imports and instead import or require them inside the methods that need filesystem access (e.g., `InputFile.fromPath`, `InputFile.fromFileSystem`), guarded by a runtime check/flag similar to `client.ts.twig`. Provide safe fallbacks for edge environments: keep `fromBuffer` usable without `fs`, have `realpathSync` return the input path or `undefined` in edge, and throw a clear error when filesystem-only methods are invoked in an edge runtime. Ensure references to `basename`/`realpathSync`/`fs` occur only after the runtime check or dynamic import, so edge environments never attempt to load the Node `fs` module.
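A minimal sketch of the lazy-loading pattern this comment asks for (names here are illustrative, not the SDK's): the dynamic import runs only inside filesystem-backed methods, so merely importing the module never touches `fs`.

```typescript
// Assumption: edge platforms expose a global marker such as EdgeRuntime.
function isEdgeRuntime(): boolean {
  return typeof (globalThis as any).EdgeRuntime !== "undefined";
}

async function getFs() {
  if (isEdgeRuntime()) {
    throw new Error("File system access is not available in edge runtimes");
  }
  return await import("fs"); // deferred: evaluated only on first real use
}

// Hypothetical filesystem-backed helper: fails at call time in edge
// runtimes, not at import time.
export async function sizeFromPathSketch(filePath: string): Promise<number> {
  const fs = await getFs();
  return fs.statSync(filePath).size;
}
```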
🧹 Nitpick comments (1)
templates/node/src/client.ts.twig (1)
217-261: LGTM - consider extracting shared chunking logic. The `InputFile` chunked upload implementation correctly handles size detection, chunk slicing, progress tracking, and header propagation. The async/await usage with `InputFile` methods is appropriate.
The chunking logic between `InputFile` (lines 217-261) and `File` (lines 263-301) is largely duplicated. Consider extracting a shared helper, though this is optional given the different sync/async APIs.
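A hedged sketch of the kind of shared helper the comment suggests: the chunk loop is parameterized over an async slicer and sender, so both `File`- and `InputFile`-style sources could reuse it. The default chunk size and all names are illustrative, not the template's actual code.

```typescript
const DEFAULT_CHUNK_SIZE = 5 * 1024 * 1024; // illustrative chunk size

async function uploadInChunks(
  totalSize: number,
  slice: (start: number, end: number) => Promise<Uint8Array>,
  send: (chunk: Uint8Array, start: number, end: number, total: number) => Promise<void>,
  chunkSize: number = DEFAULT_CHUNK_SIZE,
): Promise<void> {
  for (let start = 0; start < totalSize; start += chunkSize) {
    const end = Math.min(start + chunkSize, totalSize); // exclusive end
    await send(await slice(start, end), start, end, totalSize);
  }
}
```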
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/SDK/Language/Node.php
- templates/node/src/client.ts.twig
- templates/node/src/inputFile.ts.twig
- templates/node/src/services/template.ts.twig
🔇 Additional comments (6)
src/SDK/Language/Node.php (1)
76-77: LGTM! The union type `File | InputFile` correctly allows both the standard `File` type and the new `InputFile` type for file parameters, enabling the chunked upload functionality while maintaining backward compatibility.
templates/node/src/services/template.ts.twig (1)
4-15: LGTM! The conditional import logic correctly detects file parameters across all service methods and only imports `InputFile` when necessary, keeping the generated code clean and avoiding unused imports.
templates/node/src/client.ts.twig (1)
209-211: LGTM! The file detection correctly handles both `File` and `InputFile` instances, enabling the chunked upload to work with either type.
templates/node/src/inputFile.ts.twig (3)
25-50: LGTM! The `fromBuffer` factory correctly handles multiple input types with appropriate type detection:
- BlobLike via duck-typing (`arrayBuffer` method check)
- Buffer instances
- Strings (encoded to Buffer)
- ArrayBuffer (wrapped in Buffer)
- TypedArrays (extracted with proper offset/length)
The fallback error for unsupported types provides clear feedback.
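A simplified sketch of the kind of type detection described above. The real template also accepts Blob-like objects asynchronously via their `arrayBuffer()` method, which is omitted here; the function name is illustrative.

```typescript
function toBufferSketch(input: unknown): Buffer {
  if (Buffer.isBuffer(input)) return input;
  if (typeof input === "string") return Buffer.from(input, "utf8");
  if (input instanceof ArrayBuffer) return Buffer.from(input);
  if (ArrayBuffer.isView(input)) {
    // TypedArray / DataView: honor the view's offset and length rather
    // than wrapping the whole underlying ArrayBuffer.
    return Buffer.from(input.buffer, input.byteOffset, input.byteLength);
  }
  throw new Error("Unsupported input type for InputFile");
}
```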
72-93: LGTM! The `slice` method efficiently handles all source types:
- Path-based: uses file handles with proper cleanup via `try/finally`, and correctly handles partial reads
- Buffer: uses `subarray` for zero-copy slicing
- Blob: converts the slice to a Buffer via `arrayBuffer`
This enables true streaming for file-based sources without loading the entire file into memory.
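A sketch of the path-based branch described above, under the stated assumptions (illustrative name, not the template's code): open a handle, read only the requested `[start, end)` window, trim on a partial read, and always close the handle via `try/finally`.

```typescript
import { promises as fsp } from "fs";

async function slicePathSketch(path: string, start: number, end: number): Promise<Buffer> {
  const handle = await fsp.open(path, "r");
  try {
    const length = end - start;
    const buffer = Buffer.alloc(length);
    const { bytesRead } = await handle.read(buffer, 0, length, start);
    // A read near EOF may return fewer bytes than requested.
    return bytesRead === length ? buffer : buffer.subarray(0, bytesRead);
  } finally {
    await handle.close();
  }
}
```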
95-109: LGTM! The `toFile` and `toBuffer` methods correctly materialize the content for scenarios requiring full file access, such as small uploads that bypass chunking.
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/Node18Test.php (1)
30-33: Consider documenting the expected chunk count. The test now expects exactly 4 large file upload responses, which aligns with the PR's chunking implementation. However, the hardcoded count of 4 chunks lacks context. If the test file size or CHUNK_SIZE constant changes, this expectation could silently break.
Consider adding a comment explaining why 4 chunks are expected (e.g., test file size ÷ chunk size), or better yet, verifying that the test file and chunk size configuration are documented to produce exactly 4 chunks.
💡 Optional: Improve comment clarity
```diff
- ...Base::LARGE_FILE_RESPONSES, // Large file uploads
+ ...Base::LARGE_FILE_RESPONSES, // Large file upload (4 chunks expected)
```
tests/Node20Test.php (1)
30-33: Consider documenting the expected chunk count. The test expects exactly 4 large file upload responses for chunked uploads. While this aligns with the PR's implementation, the hardcoded count lacks documentation about why 4 chunks are expected.
Consider adding context about the test file size and chunk size that result in exactly 4 chunks, or document this relationship to prevent silent test failures if configurations change.
💡 Optional: Improve comment clarity
```diff
- ...Base::LARGE_FILE_RESPONSES, // Large file uploads
+ ...Base::LARGE_FILE_RESPONSES, // Large file upload (4 chunks expected)
```
Note: The change is consistent across all Node version tests (16, 18, 20), which is good for maintainability.
tests/Node16Test.php (1)
30-33: Consider documenting the expected chunk count and reducing duplication. The test expects exactly 4 large file upload responses for chunked uploads, matching the pattern in Node18Test and Node20Test. While consistency across Node versions is excellent, the hardcoded count of 4 lacks documentation.
Additionally, since all three Node test classes (16, 18, 20) have identical chunk count expectations, consider:
- Documenting why 4 chunks are expected (test file size relationship to CHUNK_SIZE)
- Defining a constant in the Base class (e.g., `LARGE_FILE_CHUNK_COUNT = 4`) to make the expectation explicit and easier to maintain
♻️ Optional: Improve comment clarity and reduce duplication
Option 1: Improve the comment
```diff
- ...Base::LARGE_FILE_RESPONSES, // Large file uploads
+ ...Base::LARGE_FILE_RESPONSES, // Large file upload (4 chunks expected)
```
Option 2: Use a constant in Base class
In tests/Base.php, add:
```php
protected const LARGE_FILE_CHUNK_COUNT = 4; // Large file test expects 4 chunks based on file size and CHUNK_SIZE
```
Then in all Node test files:
```php
...array_fill(0, Base::LARGE_FILE_CHUNK_COUNT, ...Base::LARGE_FILE_RESPONSES), // Large file upload chunks
```
📜 Review details
📒 Files selected for processing (3)
- tests/Node16Test.php
- tests/Node18Test.php
- tests/Node20Test.php
🧰 Additional context used
🧬 Code graph analysis (2)
tests/Node18Test.php (1)
tests/Base.php (1)
Base(17-346)
tests/Node20Test.php (1)
tests/Base.php (1)
Base(17-346)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: build (8.3, Python39)
- GitHub Check: build (8.3, WebChromium)
- GitHub Check: build (8.3, PHP80)
- GitHub Check: build (8.3, Ruby30)
- GitHub Check: build (8.3, Node20)
- GitHub Check: build (8.3, PHP83)
- GitHub Check: build (8.3, WebNode)
- GitHub Check: build (8.3, Go118)
- GitHub Check: build (8.3, KotlinJava17)
- GitHub Check: build (8.3, Android5Java17)
- GitHub Check: build (8.3, DartBeta)
- GitHub Check: build (8.3, Android14Java17)
- GitHub Check: build (8.3, DotNet90)
- GitHub Check: build (8.3, FlutterBeta)
- GitHub Check: apple (client)
- GitHub Check: swift (server)
- GitHub Check: android (client)
Resolve Node upload template conflicts and keep InputFile edge-safe.
Greptile Summary
This PR replaces the old eager `InputFile.fromPath()` read with lazy, chunk-aware file sources.
Confidence Score: 5/5. Safe to merge — no P0/P1 issues; all findings are non-blocking P2 style suggestions. The chunked upload logic is correct and mirrors the existing File path. The lazy fs loading and edge-runtime guard are implemented correctly. All remaining comments are future-proofing and style concerns that do not affect current correctness. templates/node/src/inputFile.ts.twig — switch exhaustiveness and repeated getFs() import are worth addressing in a follow-up.
Important Files Changed
Reviews (2). Last reviewed commit: "Guard typed arrays before blob-like Inpu..."
What does this PR do?
This updates the generated Node SDK upload path so file parameters can be passed as either the standard `File` type or the SDK-provided `InputFile` helper.
Before this change, `InputFile.fromPath()` eagerly read the whole file into memory before the request was sent. That made large uploads expensive and bypassed the client's chunked upload flow. This PR keeps path-backed files as lazy `InputFile` sources, lets the Node client detect them in `chunkedUpload()`, and reads only the active chunk when a file is larger than the configured chunk size.
Implementation details
- File parameters in `src/SDK/Language/Node.php` now resolve to `File | InputFile`, and array file parameters map to `(File | InputFile)[]` instead of binding only `InputFile[]` to the array suffix.
- Added a `hasFileParam` Twig filter so generated service files import `InputFile` only when the service has file parameters.
- Rewrote `templates/node/src/inputFile.ts.twig` so `InputFile` can be backed by: a path (loading `fs` only when size/slice/materialization is needed); `Uint8Array`, `ArrayBuffer`, or string; or Blob-like objects exposing `arrayBuffer()` and `slice()`.
- Updated `Client.chunkedUpload()` to handle both `File` and `InputFile` inputs.
- Kept `master` JSON bigint handling in the Node client while resolving merge conflicts.
- Made `fs`/`path` imports in `InputFile` lazy, so importing the helper does not immediately break edge-style runtimes. Path-based filesystem methods now throw a clear runtime error when filesystem access is unavailable.
Example usage
Path-backed uploads now stay lazy and are chunked without loading the full file into memory:
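Since the PR's own code sample was lost in extraction, here is a hedged, self-contained sketch of the lazy chunked read this enables (illustrative names; the real client also tracks progress and propagates headers). Only one chunk is resident in memory at a time.

```typescript
import { statSync, openSync, readSync, closeSync } from "fs";

function* readChunksSketch(path: string, chunkSize: number): Generator<Buffer> {
  const size = statSync(path).size;
  const fd = openSync(path, "r");
  try {
    for (let offset = 0; offset < size; offset += chunkSize) {
      // Read only the active chunk; never the whole file.
      const length = Math.min(chunkSize, size - offset);
      const chunk = Buffer.alloc(length);
      readSync(fd, chunk, 0, length, offset);
      yield chunk;
    }
  } finally {
    closeSync(fd);
  }
}
```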
`Buffer`, `ArrayBuffer`, `Uint8Array`, string, and Blob-like inputs are also supported:
Existing native `File` inputs continue to work:
Edge-style runtimes can import and use non-filesystem helpers. Filesystem-backed helpers fail only when they are actually used:
Conflict resolution
This branch was refreshed against current `master` (d280f89c3). The conflicts were in:
- src/SDK/Language/Node.php
- templates/node/src/client.ts.twig
The Node type override was reduced to only customize file parameters and delegate all other type resolution to `Web`, so current `master` type behavior such as int64 and model handling is preserved.
- `docker run --rm -v $(pwd):/app -w /app php:8.3-cli php example.php node`
- `php vendor/bin/phpunit --filter Node20Test`
- `node -e "globalThis.EdgeRuntime='edge'; const { InputFile } = require('./tests/sdks/node/dist/inputFile.js'); Promise.resolve().then(async () => { const file = InputFile.fromPlainText('hello', 'hello.txt'); const size = await file.size(); if (size !== 5) throw new Error('unexpected size ' + size); try { InputFile.fromPath('/tmp/file.txt'); throw new Error('fromPath did not throw'); } catch (error) { if (!String(error.message).includes('edge runtimes')) throw error; } console.log('edge inputfile smoke:passed'); }).catch((error) => { console.error(error); process.exit(1); });"`
- `composer lint-twig`
- `php -l src/SDK/Language/Node.php`
- `git diff --check`
Related PRs and Issues
Have you read the Contributing Guidelines on issues?
Yes.